Search Results

Search found 21028 results on 842 pages for 'single player'.


  • AJAX driven "page complete" function? Am I doing it right?

    - by Julian H. Lam
    This one might get me slaughtered, since I'm pretty sure it's bad coding practice, so here goes: I have an AJAX-driven site which loads both content and javascript in one go using Mootools' Request.HTML. Since I have initialization scripts that need to be run to finish "setting up" the template, I include those in a function called pageComplete() on every page. Navigating from one page to another causes the previous pageComplete() function to no longer apply, since a new one is defined. The javascript function that loads pages dynamically calls pageComplete() blindly once the AJAX call completes and the new content is loaded onto the page: function loadPage(page, params) { // page is a string, params is a javascript object if (pageRequest && pageRequest.isRunning) pageRequest.cancel(); pageRequest = new Request.HTML({ url: '<?=APPLICATION_LINK?>' + page, evalScripts: true, onSuccess: function(tree, elements, html) { // Empty previous content and insert new content $('content').empty(); $('content').innerHTML = html; pageComplete(); pageRequest = null; } }).send('params='+JSON.encode(params)); } So yes, if pageComplete() is not defined in one of the pages, the old pageComplete() is called, which could potentially be disastrous, but as of now, every single page has pageComplete() defined, even if it is empty. Good idea, bad idea?
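
    A minimal sketch of one defensive variation (not the asker's code; it keeps the same Mootools call but resets the hook to a no-op before each load, so a page that forgets to define its own pageComplete() can never re-run the previous page's initializer):

      window.pageComplete = function() {};          // default hook: do nothing

      function loadPage(page, params) {
          window.pageComplete = function() {};      // clear the previous page's hook
          if (pageRequest && pageRequest.isRunning) pageRequest.cancel();
          pageRequest = new Request.HTML({
              url: '<?=APPLICATION_LINK?>' + page,  // same URL scheme as the original
              evalScripts: true,                    // the loaded scripts may redefine window.pageComplete
              onSuccess: function(tree, elements, html) {
                  $('content').empty();
                  $('content').innerHTML = html;
                  window.pageComplete();            // safe even if the new page defined nothing
                  pageRequest = null;
              }
          }).send('params=' + JSON.encode(params));
      }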

    Read the article

  • Personal Software Process (PSP1)

    - by gentoo_drummer
    I'm trying to figure out an exercise but it doesn't really make too much sense. I'm not asking someone to provide the solution, just to try and analyse what needs to be done in order to solve this. I'm trying to understand which PSP 1.0/1.1 process I should use. PROBE? Or something else? I would greatly appreciate some help on this one from someone that has experience with the Personal Software Process methodology. Here is the question: For the reference case (“code1.c”), the following s/w metrics are provided: man-hours spent in implementation phase (per-module): 2.7 mh/file; man-hours spent in testing phase (per-module): 4.3 mh/file; estimated number of bugs remaining (per-module): 0.3 errors/function, 4 errors/module (remaining). Based on the corresponding values provided for the reference case, each of the following tasks focuses on some s/w metrics to be estimated for the test case (“code2.c”) [25 marks]: (estimated) man-hours required in implementation phase (per-module) [8 marks]; (estimated) man-hours required in testing phase (per-module) [8 marks]; (estimated) number of bugs remaining at the end of testing phase (per-module) [9 marks]. Tasks 4 through 6 should use the data provided for the reference case within the context of Personal Software Process level-1 (PSP-1), using them as a single-point historic data log. Specifically, the same s/w metrics are to be estimated for the test case (“code2.c”), using PSP as the basic estimation model. In order to perform the above listed tasks, students are advised to consider all phases of the PSP software development process, especially at levels PSP0 and PSP1. Both cases are to be treated as separate case-studies in the context of classic s/w development.
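
    A hedged illustration only (not part of the exercise text): with a single historic data point, a PROBE-style estimate degenerates to scaling the reference rate by the relative size of the new program, e.g. estimated implementation effort for code2.c ≈ 2.7 mh/file × (estimated size of code2.c / measured size of code1.c), with the same scaling applied to the 4.3 mh/file testing rate and to the defect densities.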

    Read the article

  • Are there any concrete examples of where a parallelizing compiler would provide a value-adding benefit?

    - by jamie
    Paul Graham argues that: It would be great if a startup could give us something of the old Moore's Law back, by writing software that could make a large number of CPUs look to the developer like one very fast CPU. ... The most ambitious is to try to do it automatically: to write a compiler that will parallelize our code for us. There's a name for this compiler, the sufficiently smart compiler, and it is a byword for impossibility. But is it really impossible? Can someone provide a concrete example where a parallelizing compiler would solve a pain point? Web-apps don't appear to be a problem: just run a bunch of Node processes. Real-time raytracing isn't a problem: the programmers are writing multi-threaded, SIMD assembly language quite happily (indeed, some might complain if we make it easier!). The holy grail is to be able to accelerate any program, be it MySQL, Garage Band, or Quicken. I'm looking for a middle ground: is there a real-world problem that you have experienced where a "smart-enough" compiler would have provided a real benefit, i.e. that someone would pay for? A good answer is one where there is a process where the computer runs at 100% CPU on a single core for a painful period of time. That time might be 10 seconds, if the task is meant to be quick. It might be 500ms if the task is meant to be interactive. It might be 10 hours. Please describe such a problem. Really, that's all I'm looking for: candidate areas for further investigation. (Hence, raytracing is off the list because all the low-hanging fruit have been feasted upon.) I am not interested in why it cannot be done. There are a million people willing to point to the sound reasons why it cannot be done. Such answers are not useful.

    Read the article

  • How can I change a video container without re-encoding or compressing the file?

    - by GiH
    When I ripped my Kill Bill DVD I used HandBrake and put it into a single AVI. I realize that I didn't get the subtitles, so what I want to do is convert the AVI to MKV and put the subtitles in the MKV. How do I go about doing this without losing any quality? I don't care about compressing or anything, just want to change the container. If HandBrake can do it, I'd prefer to use that since I already have it.
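
    One commonly suggested approach (a sketch only; HandBrake is a transcoder, so a separate remuxing tool is usually recommended, and the file names below are placeholders): copy the existing streams into an MKV container and mux the subtitle file in alongside them, e.g. with ffmpeg or MKVToolNix's mkvmerge:

      ffmpeg -i "kill-bill.avi" -i "kill-bill.srt" -c copy "kill-bill.mkv"
      mkvmerge -o "kill-bill.mkv" "kill-bill.avi" "kill-bill.srt"

    The -c copy flag tells ffmpeg to copy every stream without re-encoding, so no quality is lost.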

    Read the article

  • Naming standard for additional A records/IP addresses for IIS servers?

    - by serialhobbyist
    When you're adding another IP address to an IIS server, what naming standards do you use for the A records? Background: I've a bunch of sites on an IIS server which use (CNAME'd) host-headers and a single IP address. Server names (and A records) adhere to unfriendly (as in difficult-to-remember) naming standards, whereas CNAMEs, and therefore host-headers, can be friendly. Now I've a need for several SSL certificates for different sites. I was thinking about using an additional IP address for each to-be-SSL'd site but still using friendly CNAMEs. So then I come to the question of what to call the A record. What do you do? Related to this question.

    Read the article

  • Why's SMC failing on startup?

    - by Brian Knoblauch
    Trying to remove a user from one of our servers, but I seem to be thwarted at every turn... SMC refuses to load the user list (failing with a NoClassDefFoundError in the listAll method of UserContent). vipw just returns with "vipw: /etc/passwd file busy". I'm the only user on the system at the moment (it's our backup SRSS box), and both of these fail even right after a reboot. I don't have console access at the moment either unfortunately (or I would try single user mode). Of course, even if init mode S worked and let me do this one task, it doesn't solve the root problem. Ideas?

    Read the article

  • Is an Ethernet point to point connection without a switch real time capable?

    - by funksoulbrother
    In automation and control, it is commonly stated that Ethernet can't be used as a bus because it is not real-time capable due to packet collisions. If important control packets collide, they often can't keep the hard real-time conditions needed for control. But what if I have a single point-to-point connection with Ethernet, no switch in between? To be more precise, I have an FPGA board with a Gigabit Ethernet port that is connected directly to my control PC. I think the benefits of Gigabit Ethernet over CAN or USB for a p2p connection are huge, especially for high sampling rates and lots of data generation on the FPGA board. Am I correct that with a point-to-point connection there can't be any packet collisions, and that therefore a real-time environment is achievable even with Ethernet? Thanks in advance! ~fsb

    Read the article

  • Multiple VLANs routed on one NIC? Trunk, General, or Access?

    - by Aceth
    OK, for the last week I've been racking my brain over this... I have an SRW208P with 802.1q support, and a virtual Endian appliance. I would like to have 3 VLANs with everything routed through the Endian appliance, i.e. the virtual server has 2 NICs bridged to the switch. This is where I'm getting confused. On the 8-port switch I've got the 3 VLANs set up OK (all untagged, as the devices are not going to be VLAN aware); it's the port connecting the Endian firewall to the switch I'm having trouble with (the second NIC goes to the ADSL modem and is NAT'd). Is it meant to be a trunk, "General" or "Access", and then untagged or tagged? The end goal is to have VLAN traffic routed through the single NIC and have Endian route VLAN traffic according to the rules. Anyone have any ideas on the Cisco small business stuff? Thanks

    Read the article

  • multiple domains, one static IP address and latency

    - by shirish
    How is latency affected when multiple domains are using one single static IP address? The scenario is shared web-hosting. By latency I mean the DNS lookup the client has to do. As far as I understand it, the browser would hit the root servers to try to figure out the IP address and where it belongs, and then when the request reaches the correct server, that server probably looks up some sort of table to determine which site name matches and serves that site back to the user's browser. Is my understanding correct, or backwards, or what?

    Read the article

  • How can I keep websites from knowing where I live?

    - by D Connors
    This question is related to issues of practicality, not security. I live in Brazil and, apparently, every single website I visit knows about it. Usually that's OK, but there are quite a few sites that don't make use of that information adequately. For instance: Bing keeps thinking that Brazilian pages are way more relevant to me than American ones (which they're not). Google.com always redirects me to google.com.br. Microsoft automatically sends me to horribly translated support pages in Portuguese (which would just be easier to read in English). These are just a few examples. Usually it's stuff I can live with (or work around), but some of them are just plain irritating. I have geolocation disabled in Firefox, so I guess they're either getting this information from my IP or from Windows itself (which I bought here). Is there a way to avoid this? Either tell them nothing or make them think I live somewhere else? Thanks

    Read the article

  • Visual Studio .NET 2003 on Windows 7 hangs on search

    - by Nikhil
    So I have Visual Studio 2003 running on Windows 7 - yeah I am aware it isn't officially supported - and no, unfortunately I can't change that situation :-( For the most part it works OK but I have a specific problem, that I can't figure out. The application hangs if you do a project wide search (Ctrl - Shift - F) for a string. I have a reasonably powerful machine and all the other heavy tasks like compiling and debugging all work fine. It also works if I restrict the search to the current document (Ctrl - F). I am running it as administrator and VS.NET 2003 SP1 has been applied. The size of the project does not seem to be a problem since a colleague is also experiencing this issue for a single project solution containing 5 pages. I am currently using Windows Search as a work-around and I was wondering if there is something I missed that I should try. PS: I have asked this question on stack overflow as well - but I suspect this might be problem with Windows 7 OS - so I thought I'd cross post it here as well.

    Read the article

  • IIS 7 - Application pools

    - by vikp
    I made a CompanySite website in IIS - it's the only website on that server. I created a .NET 4 Integrated pool for that website. I've installed an ASP.NET site into CompanySite and it works fine. Now I'd like to install an application within this site, so I create another .NET 4 application pool for that application, then install the application into CompanySite using that application pool. As soon as setup starts, the website goes offline, either with a 503 or with "The page cannot be displayed because an internal server error has occurred." The only way to fix this (that I have found) is to uninstall that application and restart Windows Server, not just IIS. The question is: how can I install multiple applications within a single site? Thank you

    Read the article

  • Need to hook up HP dv7-3085dx with Nvidia GeForce GT 230M to my Dell 30 inch LCD 3007WFP at max resolution

    - by user14660
    I recently bought an HP laptop (dv7-3085dx) (http://reviews.cnet.com/laptops/hp-pavilion-dv7-3085dx/4505-3121_7-33776108.html) which is supposed to have a pretty good video card (NVIDIA GeForce GT 230M). The card is supposed to output a max resolution of 2560x1600, which is also the max resolution of my monitor (http://www.ubergizmo.com/15/archives/2006/02/dell_3007wfp_on_dell_2001fp_action_8_megapixel_desktop.html). Now I bought an HDMI to dual-link DVI cable (http://www.amazon.com/gp/product/B002KKLYDK/ref=oss_product)... this is after Best Buy's 70 dollar HDMI to DVI cable (perhaps it was 'single' link?) didn't give me the best resolution. In Windows 7, when I try to set the max resolution for my 30 in monitor, I only get 1280x800... which is absurd. The monitor is great, I love the laptop, and the video card supposedly supports such resolutions. So I can't figure out why I'm not getting better resolution (by the way, when I "detect" my monitor in Windows 7, it is shown correctly as DELL 3007WFP!).

    Read the article

  • Identify SATA hard drive

    - by Rob Nicholson
    Very similar question to: Physically Identify the failed hard drive - but for Windows 2003 this time. Scenario: four identical SATA hard drives plugged into the motherboard (no RAID controller here), configured as a single drive in Windows as a spanned volume; one of them is starting to fail with the error "The driver detected a controller error on \Device\Harddisk3". How do you cross-reference Harddisk3 to the physical SATA connection on the motherboard so you know which drive to replace? I know replacing this drive will trash the spanned array, requiring it to be rebuilt anyway, so my rough and ready solution is: delete the spanned partition; create individual partitions on each drive labelled E:, F:, G: and H: and work out which one is Harddisk3; power down, remove each disk one at a time, and power up until the drive letter disappears. But this seems a rather crude method of identifying the drive. The SATA connectors will be numbered on the motherboard, but I appreciate this might not cross-match to what Windows calls them. Thanks, Rob.

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don’t have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out from ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even if you don’t have a syntax error compiling the code, you might obtain this error at execution time: The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only. If this is happening to you, the best solution I’ve found is manipulating the XMLA code generated by AMO, moving the Binding nodes into the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can be used also if you have the same problem in a Multidimensional model.
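
    To make the shape of the fix concrete, here is a rough skeleton of where the pieces end up (reconstructed from the error message and standard XMLA partition bindings, with placeholder object IDs - the exact element layout should be checked against the full article): the ProcessAdd commands stay inside Batch/Parallel, while each query binding is moved out to the Bindings collection directly under Batch.

      <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <Parallel>
          <Process>
            <Type>ProcessAdd</Type>
            <Object>
              <DatabaseID>MyTabularDB</DatabaseID>
              <CubeID>Model</CubeID>
              <MeasureGroupID>Table1</MeasureGroupID>
              <PartitionID>Table1 Partition</PartitionID>
            </Object>
          </Process>
          <!-- one Process command per table receiving new rows -->
        </Parallel>
        <Bindings>
          <!-- moved out of ProcessAdd: identifies the partition and the SQL that reads only the new rows -->
          <Binding>
            <DatabaseID>MyTabularDB</DatabaseID>
            <CubeID>Model</CubeID>
            <MeasureGroupID>Table1</MeasureGroupID>
            <PartitionID>Table1 Partition</PartitionID>
            <Source xsi:type="QueryBinding">
              <DataSourceID>SqlServerSource</DataSourceID>
              <QueryDefinition>SELECT ... WHERE LoadDate &gt; ...</QueryDefinition>
            </Source>
          </Binding>
        </Bindings>
      </Batch>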

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on StackOverflow and was advised to move it over to StackExchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final year project at University, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance - I can't include details of my project but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible and I'm wondering whether XML files may provide a better alternative? Anyone out there with experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development that could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: This application is for a small-scale business; the data source would not need to be updated by more than one source but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at the most).

    Read the article

  • IIS 6 on x64 and long URLs

    - by mausch
    I have a very long URL on a site hosted on Windows 2003 x64 that looks like this: http://myhost/a_very_very_long_url_around_300_chars_long (i.e. a single, very long segment around 300 chars long) Problem is, I'm getting a 400 Bad Request response from HTTP.SYS (it doesn't even reach IIS). I can tell because these requests show up in system32\LogFiles\HTTPERR, e.g: 2009-09-17 19:51:29 200.123.179.9 3636 192.168.129.50 80 HTTP/1.1 GET /a_very_very_long_url_around_300_chars_long 400 - URL - I tried setting UrlSegmentMaxLength in the registry and this fixes the issue on my Windows 2003 x86 box but not on the x64 production server. I tried this on another Win2k3 x64 server and it also failed. Any hints?

    Read the article

  • Unset the system immutable bit in Mac OS X

    - by skylarking
    In theory I believe you can unlock and remove the system immutable bit with: chflags noschg /Path/To/File. But how can you do this when you've set the bit as root? I have a file that is locked, and even running this command as root will not work, as the operation is not permitted. I tried booting into single-user mode to no avail. I seem to remember that even though you are in as root, you are in at level '1', and to be able to remove the system-immutable flag you need to be logged in at level '0'. Does this have something to do with this issue?

    Read the article

  • Exchange 2010 install locks out high level accounts

    - by tearman
    Basically, we assume our problem started when we installed Exchange 2010 alongside our Exchange 2003 server. The Exchange 2010 server is not active, just running on the domain. What's actually going on is that user groups like Enterprise Admins are getting a single deny flag on Full Control over mailboxes currently residing on the Exchange 2003 server, which is preventing any of us from making changes. It says these permissions are inherited from the Parent Object, but we have no idea which one that is. Any idea on how to go about fixing this?

    Read the article

  • Need help completing this PowerShell script with some Exchange 2010 commands

    - by Pure.Krome
    Hi folks. The following PowerShell script lists all the email aliases I have for a single mailbox: >$mbx = Get-Mailbox myuser >$mbx.EmailAddresses and that lists all the addresses, e.g. SmtpAddress : [email protected] AddressString : [email protected] ProxyAddressString : smtp:[email protected] Prefix : SMTP IsPrimaryAddress : False PrefixString : smtp SmtpAddress : [email protected] AddressString : [email protected] ProxyAddressString : smtp:[email protected] Prefix : SMTP IsPrimaryAddress : False PrefixString : smtp SmtpAddress : [email protected] AddressString : [email protected] ProxyAddressString : SMTP:[email protected] Prefix : SMTP IsPrimaryAddress : True PrefixString : SMTP Now to add a new email address, I run the following PowerShell commands: $mbx.EmailAddresses += "myEmailAddress.com" $mbx | Set-Mailbox So I'm not sure how I can use foreach to remove each address? I tried: @mbx.EmailAddresses | foreach { $mbx.EmailAddresses -= $._SmtpAddress } and that failed miserably. That's my first attempt at a PS script, ever :P Can anyone help?
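
    A minimal sketch of one possible approach (untested here; the alias shown is a placeholder): either let Set-Mailbox remove entries from the multi-valued EmailAddresses property directly, or filter the collection to the secondary addresses and remove those before writing it back.

      # Remove one known alias; Exchange 2010 multi-valued properties accept @{Add=...; Remove=...}
      Set-Mailbox myuser -EmailAddresses @{ Remove = "smtp:oldalias@example.com" }

      # Or strip every secondary smtp alias while keeping the primary SMTP address
      $mbx = Get-Mailbox myuser
      $secondary = $mbx.EmailAddresses | Where-Object { -not $_.IsPrimaryAddress }
      foreach ($address in $secondary) {
          $mbx.EmailAddresses.Remove($address)    # remove the ProxyAddress entry from the collection
      }
      $mbx | Set-Mailbox                          # same write-back pattern the question uses for adding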

    Read the article

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics: I joined Oracle on 5th January 2011 as part of my internship program in BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work in Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, in which you know almost nothing, they helped me gain knowledge at a rapid speed. Thanks to the vast database of study material on the Oracle site, I could start on my project in a very short time. For me, the time flew like anything and made the six months look like a few days. It was probably due to the team that the work was so much fun. We had our deadlines but had full freedom as to how to work and when to work. I don't remember a single instance in which I was working and not listening to songs. I mean, it will always be a time to remember. I hope to join this company and make this time last forever. Samarth

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is that "If we ever want to run on Linux/OS X/Whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux; they are just not open sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?

    Read the article

  • Excel: Conditional Formatting (Highlighting) Values Based on Another Worksheet

    - by ScottSEA
    I have a workbook that has two worksheets. The first worksheet is simply a list of the first 78,498 prime numbers in a single column, A1-A78498. The second worksheet has a grid of numbers from 1 to n. The goal is to highlight the cells with prime numbers in the grid by referencing the prime number values in the other worksheet. Is this possible, and if so, how? Edit: I have named the column with my prime numbers "PRIMES1T". I would like the formula to work for the entire worksheet, regardless of size, but my Excel-fu is extremely weak. If at all possible, I would like to be able to enter the formula in the dialog box for conditional formatting. I have tried =NOT(ISNA(MATCH(A:Z,PRIMES1T,0))) (only A-Z, but have to start somewhere) with no luck.
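
    One commonly suggested alternative (a sketch, assuming PRIMES1T is a workbook-level named range - which is also the usual workaround for conditional formatting not being able to reference another sheet directly in older Excel versions): select the grid starting from A1 and use a COUNTIF rule such as

      =COUNTIF(PRIMES1T,A1)>0

    The reference to A1 is relative, so the rule is re-evaluated against each cell in the selected range, highlighting exactly the values that appear in the prime list.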

    Read the article

  • Remote Debugging

    - by Pemdas
    Obviously, the easiest way to solve a bug is to be able to reproduce it in house. However, sometimes that is not practical. For starters, users are often not very good at providing you with useful information. Customer Service: "What seems to be the issue?" User: "It crashed!" To further compound that, sometimes the bug only occurs under certain environmental conditions that cannot be adequately replicated in house. With that in mind, it is important to build some sort of diagnostic framework into your product. What types of solutions have you seen or used in your experience? Logging seems to be the predominant method, which makes sense. We have a fairly sophisticated logging framework in place with different levels of verbosity and the ability to filter on specific modules (actually we can filter down to the granularity of a single file). Error logs are placed strategically to manufacture a pretty good representation of a stack trace when an error occurs. We don't have the luxury of 10 million terabytes of disk space since I work on embedded platforms, so we have two ways of getting them off the system: a serial port and a syslog server. However, an issue we run into sometimes is actually getting the user to turn the logs on. Our current framework often requires some user interaction.

    Read the article

  • Using EC2 instance as main development platform

    - by David
    My problem: I am working as a consultant for various companies. Each company provides me with a laptop with their software on it, and I also have my own where I have my development environment. I tend to buy a new laptop every second year and find myself spending lots of time configuring and installing software. I also sometimes spend a lot of time waiting for my laptop to process things. To solve all these issues, I am now considering using EC2 (running Windows instances) as my main development platform and just accessing this from any PC I happen to be at. I calculated that running the High-CPU On-Demand Instances (medium) for 8 hours a day for a year costs me $580, which is acceptable. I imagine that when I approach the workplace each day, I will make a single click on my phone to fire up the instance, so it is ready when I get to work. I should have different icons on my phone to fire up the various instance types. The same software should of course automatically be loaded on the various hardware (sometimes I would even need their instance with 68.4 GB of memory). Another advantage is that if I am having a specific problem with my instance, I could fire up another instance and have someone look into the problem and update the image. My question: Does anyone have experience with such a setup on EC2? What kind of problems do you foresee?

    Read the article
